Most existing learning-based single-image deraining networks focus on how rain streaks in rainy images degrade visual imaging, while ignoring the fog produced by increased air humidity in rainy environments; as a result, the derained images suffer from low generation quality and blurred texture details. To address these problems, an asymmetric unsupervised end-to-end image deraining network model was proposed. It mainly consists of a rain-fog removal network, a rain-fog feature extraction network, and a rain-fog generation network, which form two data-domain mapping modules: Rain-Clean-Rain and Clean-Rain-Clean. These three sub-networks constitute two parallel transformation paths: a rain removal path and a rain-fog feature extraction path. In the rain-fog feature extraction path, a rain-fog-aware extraction network based on global and local attention mechanisms was proposed to learn rain-fog-related features by exploiting the global self-similarity and local discrepancy present in rain-fog features. In the rain removal path, a rainy-image degradation model and the extracted rain-fog-related features were introduced as prior knowledge to enhance rain-fog image generation, thereby constraining the rain-fog removal network and improving its mapping capability from the rain data domain to the rain-free data domain. Extensive experiments on different rainy-image datasets show that, compared with the state-of-the-art deraining method CycleDerain, the proposed model improves the Peak Signal-to-Noise Ratio (PSNR) by 31.55% on the synthetic rain-fog dataset HeavyRain. The proposed model can adapt to different rainy scenarios, generalizes better, and recovers image details and texture information more faithfully.
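The rainy-image degradation model used as prior knowledge is not specified in the abstract; a commonly used formulation for combined rain-and-fog scenes (the one underlying synthetic rain-fog datasets such as HeavyRain) composes rain streak layers with an atmospheric scattering term. A sketch of that standard model, stated here as an assumption rather than the authors' exact formulation:

```latex
% Rain-fog degradation model (assumed standard form):
% O  - observed rainy image,  B - clean background,
% S_i - the i-th rain streak layer, t - transmission map,
% A  - global atmospheric light, \odot - element-wise product.
O = t \odot \Big( B + \sum_{i=1}^{n} S_i \Big) + (1 - t) \odot A
```

Under this model, the fog term $(1-t)\odot A$ grows where transmission $t$ is low, which is why ignoring fog and removing only the streak layers $S_i$ leaves a hazy, low-contrast result.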
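The abstract describes combining global self-similarity with local discrepancy via global and local attention. A minimal NumPy sketch of that idea, under the assumption that "global" means non-local attention over all spatial positions and "local" means attention restricted to a small neighbourhood (all function names here are illustrative, not the authors' API):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(feat):
    """Non-local attention over all positions.

    feat: (N, C) array of N spatial positions with C channels.
    Exploits global self-similarity: every position attends to
    every other position, weighted by feature similarity.
    """
    n, c = feat.shape
    scores = feat @ feat.T / np.sqrt(c)      # (N, N) pairwise similarity
    return softmax(scores, axis=-1) @ feat   # similarity-weighted features

def local_attention(feat, window=3):
    """Attention restricted to a small neighbourhood of each position,
    capturing local discrepancies between rain-fog and background."""
    n, c = feat.shape
    out = np.zeros_like(feat)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        neigh = feat[lo:hi]                          # (w, C) neighbourhood
        w = softmax(neigh @ feat[i] / np.sqrt(c))    # (w,) attention weights
        out[i] = w @ neigh                           # weighted local average
    return out

def rainfog_features(feat):
    """Fuse global and local attention responses (simple sum here;
    the fusion rule is an assumption for illustration)."""
    return global_attention(feat) + local_attention(feat)
```

The key design point this sketch illustrates: the global branch re-weights features by similarity across the whole image (rain streaks and fog repeat globally), while the local branch only compares a position with its immediate neighbours, so sharp local differences from the background remain visible in the fused features.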